merge into dev #130
base: dev
Conversation
…lubility regression
…tments for regression tasks and classification tasks
add loading from checkpoint pretrained model fix
I added some comments. It would be great if you could have a look at them. Also, you have added quite a number of config files. Some seem to be very specific (e.g. an ELECTRA config with a different learning rate for a specific experiment). My suggestion would be to either remove those configs (and publish them in a paper-specific Zenodo archive, or mention the parameters in the paper) or group them so that new users don't get overwhelmed (e.g. all MoleculeNet dataset configs could go in one folder).
chebai/loss/semantic.py
Outdated
      use_sigmoidal_implication: bool = False,
-     weight_epoch_dependent: Union[bool | tuple[int, int]] = False,
+     weight_epoch_dependent: Union[bool, Tuple[int, int]] = False,
+     weight_epoch_dependent: Union[bool, Tuple[int, int]] = False,
why does weight_epoch_dependent appear twice here?
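For reference, a minimal sketch of what the de-duplicated signature presumably should look like. Attributing it to DisjointLoss and reading the tuple as an epoch range are assumptions for illustration, not taken from this PR:

from typing import Tuple, Union

class DisjointLoss:  # name taken from the import in electra.py; body is a sketch
    def __init__(
        self,
        use_sigmoidal_implication: bool = False,
        # single declaration only; a second parameter with the same name is a SyntaxError
        # assumed semantics (not from this PR): False = off, True = default schedule,
        # (start, end) = ramp the weight between those epochs
        weight_epoch_dependent: Union[bool, Tuple[int, int]] = False,
    ) -> None:
        self.use_sigmoidal_implication = use_sigmoidal_implication
        self.weight_epoch_dependent = weight_epoch_dependent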
chebai/models/base.py
Outdated
      if self.pass_loss_kwargs:
          loss_kwargs = loss_kwargs_candidates
-         loss_kwargs["current_epoch"] = self.trainer.current_epoch
+         # loss_kwargs["current_epoch"] = self.trainer.current_epoch
why is this commented out? Afaik we don't have any loss function at the moment that needs this (this was added for some experimental semantic loss features that didn't perform well). Does this break anything?
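For context, a hedged sketch of how the commented-out line would typically be consumed: the model forwards the trainer's current epoch to a loss whose weighting depends on it. The class and the weighting formula here are illustrative only, not chebai's implementation:

import torch

class EpochWeightedLoss(torch.nn.Module):
    """Illustrative loss that accepts current_epoch via kwargs (not chebai's code)."""

    def __init__(self, max_epochs: int = 100):
        super().__init__()
        self.max_epochs = max_epochs
        self.bce = torch.nn.BCEWithLogitsLoss()

    def forward(self, logits, labels, current_epoch: int = 0, **kwargs):
        # illustrative schedule: scale the loss up as training progresses
        weight = min(current_epoch / self.max_epochs, 1.0)
        return self.bce(logits, labels) * (1.0 + weight)

# In the LightningModule step, the kwargs would then be assembled roughly as:
#     loss_kwargs["current_epoch"] = self.trainer.current_epoch
#     loss = self.criterion(logits, labels, **loss_kwargs)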
chebai/models/electra.py
Outdated
-     from chebai.loss.semantic import DisjointLoss as ElectraChEBIDisjointLoss  # noqa
+     # TODO: put back in before pull request
+     # from chebai.loss.semantic import DisjointLoss as ElectraChEBIDisjointLoss  # noqa
I guess you wanted to uncomment this :)
This will be a problem for merging. I have added new SMILES tokens on a different branch (from PubChem), so the new PubChem-pretrained model (and all models based on it) will depend on those tokens.
Are the tokens you added here actually used by a model, or are they just artifacts?
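The coupling the reviewer points at can be illustrated generically: a pretrained checkpoint's embedding matrix must match the tokenizer vocabulary, so any added tokens force a resize (with newly initialized rows). A plain Hugging Face sketch with a toy config, not chebai's loading code:

from transformers import ElectraConfig, ElectraModel

# toy config instead of a real checkpoint, just to show the size coupling
config = ElectraConfig(vocab_size=1400)
model = ElectraModel(config)

num_added_tokens = 10  # e.g. new SMILES tokens appended to the token file
model.resize_token_embeddings(config.vocab_size + num_added_tokens)
# a checkpoint saved before the tokens were added no longer matches the
# resized embedding shape unless loading handles the mismatch explicitly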
I have removed the part in question and will open an issue to look into what is going on here.
Is there a reason for deleting this file?
restructuring of config files, fixing small issues from merging
addressed all comments
No description provided.